Torque Sensor


Reinforcement Learning for Robotic Safe Control with Force Sensing

Lin, Nan, Zhang, Linrui, Chen, Yuxuan, Chen, Zhenrui, Zhu, Yujun, Chen, Ruoxi, Wu, Peichen, Chen, Xiaoping

arXiv.org Artificial Intelligence

For tasks requiring complicated manipulation in unstructured environments, traditional hand-coded methods are ineffective, while reinforcement learning can provide more general and useful policies. Although reinforcement learning is able to obtain impressive results, its stability and reliability are hard to guarantee, which poses potential safety threats. Besides, the transfer from simulation to the real world can also lead to unpredictable situations. To enhance the safety and reliability of robots, we introduce force and haptic perception into reinforcement learning. We demonstrate that the force-based reinforcement learning method can be more adaptive to the environment, especially in sim-to-real transfer. Experimental results show that in an object pushing task, our strategy is safer and more efficient in both simulation and the real world; thus it holds prospects for a wide variety of robotic applications.
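The abstract does not specify the reward design, but a force-aware reward of the kind it describes can be sketched as follows; the function name, weights, and force limit are illustrative assumptions, not the paper's implementation:

```python
def shaped_reward(distance_to_goal, contact_force,
                  force_limit=10.0, w_task=1.0, w_force=0.1):
    """Task-progress reward minus a penalty for unsafe contact forces.

    Only the portion of the measured force that exceeds the safe
    limit is penalized, so gentle contact is not discouraged.
    """
    task_reward = -w_task * distance_to_goal
    excess = max(0.0, contact_force - force_limit)
    return task_reward - w_force * excess ** 2
```

A quadratic penalty on the excess force keeps the gradient small near the limit and large for clearly unsafe contacts, which is one common way to fold safety into the return.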


Contact Sensing via Joint Torque Sensors and a Force/Torque Sensor for Legged Robots

Grinberg, Jared, Ding, Yanran

arXiv.org Artificial Intelligence

This paper presents a method for detecting and localizing contact along robot legs using distributed joint torque sensors and a single hip-mounted force-torque (FT) sensor using a generalized momentum-based observer framework. We designed a low-cost strain-gauge-based joint torque sensor that can be installed on every joint to provide direct torque measurements, eliminating the need for complex friction models and providing more accurate torque readings than estimation based on motor current. Simulation studies on a floating-based 2-DoF robot leg verified that the proposed framework accurately recovers contact force and location along the thigh and shin links. Through a calibration procedure, our torque sensor achieved an average 96.4% accuracy relative to ground truth measurements. Building upon the torque sensor, we performed hardware experiments on a 2-DoF manipulator, which showed sub-centimeter contact localization accuracy and force errors below 0.2 N.
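The generalized momentum-based observer underlying this framework can be sketched as a discrete-time residual update whose output converges to the external joint torque; the 1-DoF simplification and the gain value below are illustrative assumptions, not the paper's implementation:

```python
def momentum_observer_step(r_prev, integral_prev, p, tau,
                           coriolis_term, gravity, K_O, dt):
    """One discrete step of a generalized momentum-based observer.

    p is the generalized momentum M(q)*qdot; the residual r tracks
    the external torque tau_ext that perturbs the modeled dynamics.
    """
    # Integrate the modeled momentum dynamics plus the current residual.
    integral = integral_prev + (tau + coriolis_term - gravity + r_prev) * dt
    # Residual: gain times the mismatch between measured and predicted momentum.
    r = K_O * (p - integral)
    return r, integral
```

For a constant external torque the residual converges exponentially at a rate set by the observer gain `K_O`, which is why this scheme needs no acceleration measurement.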


A Tightly Coupled IMU-Based Motion Capture Approach for Estimating Multibody Kinematics and Kinetics

Osman, Hassan, de Kanter, Daan, Boelens, Jelle, Kok, Manon, Seth, Ajay

arXiv.org Artificial Intelligence

Inertial Measurement Units (IMUs) enable portable, multibody motion capture (MoCap) in diverse environments beyond the laboratory, making them a practical choice for diagnosing mobility disorders and supporting rehabilitation in clinical or home settings. However, challenges associated with IMU measurements, including magnetic distortions and drift errors, complicate their broader use for MoCap. In this work, we propose a tightly coupled motion capture approach that directly integrates IMU measurements with multibody dynamic models via an Iterated Extended Kalman Filter (IEKF) to simultaneously estimate the system's kinematics and kinetics. By enforcing kinematic and kinetic properties and utilizing only accelerometer and gyroscope data, our method improves IMU-based state estimation accuracy. Our approach is designed to allow for incorporating additional sensor data, such as optical MoCap measurements and joint torque readings, to further enhance estimation accuracy. We validated our approach using highly accurate ground truth data from a 3 Degree of Freedom (DoF) pendulum and a 6 DoF Kuka robot. We demonstrate a maximum Root Mean Square Difference (RMSD) in the pendulum's computed joint angles of 3.75 degrees compared to optical MoCap Inverse Kinematics (IK), which serves as the gold standard in the absence of internal encoders. For the Kuka robot, we observe a maximum joint angle RMSD of 3.24 degrees compared to the Kuka's internal encoders, while the maximum joint angle RMSD of the optical MoCap IK compared to the encoders was 1.16 degrees. Additionally, we report a maximum joint torque RMSD of 2 Nm in the pendulum compared to optical MoCap Inverse Dynamics (ID), and 3.73 Nm in the Kuka robot relative to its internal torque sensors.
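A minimal sketch of the Iterated EKF measurement update used in this kind of estimator, assuming a generic nonlinear measurement model (the state layout and noise models of the actual filter are not given in the abstract):

```python
import numpy as np

def iekf_update(x0, P, y, h, H_jac, R, n_iter=5):
    """Iterated EKF measurement update: relinearize h() at each iterate.

    x0: prior mean, P: prior covariance, y: measurement,
    h: measurement model, H_jac: its Jacobian, R: measurement noise cov.
    """
    x = x0.copy()
    for _ in range(n_iter):
        H = H_jac(x)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        # Gauss-Newton style iterate: innovation evaluated at the current
        # linearization point, correction referenced back to the prior mean.
        x = x0 + K @ (y - h(x) - H @ (x0 - x))
    H = H_jac(x)
    P_post = (np.eye(len(x)) - K @ H) @ P
    return x, P_post
```

For a linear measurement model the iteration is stationary after the first pass and reduces to the standard EKF update; the relinearization only pays off when h() is meaningfully nonlinear, as with orientation-dependent IMU measurements.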


Identification and validation of the dynamic model of a tendon-driven anthropomorphic finger

Li, Junnan, Chen, Lingyun, Ringwald, Johannes, Fortunic, Edmundo Pozo, Ganguly, Amartya, Haddadin, Sami

arXiv.org Artificial Intelligence

This study addresses the absence of an identification framework to quantify a comprehensive dynamic model of human and anthropomorphic tendon-driven fingers, which is necessary to investigate the physiological properties of human fingers and improve the control of robotic hands. First, a generalized dynamic model was formulated, which takes into account the inherent properties of such a mechanical system. This includes rigid-body dynamics, coupling matrix, joint viscoelasticity, and tendon friction. Then, we propose a methodology comprising a series of experiments, for step-wise identification and validation of this dynamic model. Moreover, an experimental setup was designed and constructed that features actuation modules and peripheral sensors to facilitate the identification process. To verify the proposed methodology, a 3D-printed robotic finger based on the index finger design of the Dexmart hand was developed, and the proposed experiments were executed to identify and validate its dynamic model. This study could be extended to explore the identification of cadaver hands, aiming for a consistent dataset from a single cadaver specimen to improve the development of musculoskeletal hand models.


Gel-OPTOFORT Sensor: Multi-axis Force/Torque Measurement and Geometry Observation Using GelSight and Optoelectronic Sensor Technology

Noh, Yohan, Upare, Harshal, Osman, Dalia, Li, Wanlin

arXiv.org Artificial Intelligence

Although conventional GelSight-based tactile and force/torque sensors excel at detecting objects' geometry and texture while simultaneously sensing multi-axis forces, their performance is limited by the camera's low frame rate and the inherent properties of the elastomer. These limitations restrict their ability to measure higher force ranges at high sampling frequencies. Moreover, because the GelSight sensor unit and the multi-axis force/torque unit are structurally coupled, the force/torque measurement ranges of GelSight-based force/torque sensors are not adjustable. To address these weaknesses, this paper proposes the Gel-OPTOFORT sensor, which combines a GelSight sensor with an optoelectronic-sensor-based force/torque sensor.


Current-Based Impedance Control for Interacting with Mobile Manipulators

de Wolde, Jelmer, Knoedler, Luzia, Garofalo, Gianluca, Alonso-Mora, Javier

arXiv.org Artificial Intelligence

As robots shift from industrial to human-centered spaces, adopting mobile manipulators, which expand workspace capabilities, becomes crucial. In these settings, seamless interaction with humans necessitates compliant control. Two common methods for safe interaction, admittance and impedance control, require force or torque sensors, which are often absent in lower-cost or lightweight robots. This paper presents an adaptation of impedance control that can be used on current-controlled robots without force or torque sensors, and its application to compliant control of a mobile manipulator. A calibration method is designed that estimates the actuators' current/torque ratios and friction, is used by the adapted impedance controller, and can handle model errors. The calibration method and the performance of the designed controller are experimentally validated on the Kinova GEN3 Lite arm. Results show that the calibration method is consistent and that the designed controller is compliant while also tracking targets with five-millimeter precision when no interaction is present. Additionally, this paper presents two operational modes for interacting with the mobile manipulator: one for guiding the robot around the workspace by interacting with the arm, and another for executing a tracking task, both maintaining compliance to external forces. These operational modes were tested in real-world experiments, affirming their practical applicability and effectiveness.
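A joint-space impedance law mapped to a current command can be sketched as follows, assuming per-joint gains and a Coulomb-plus-viscous friction model; the names `kt`, `coulomb`, and `viscous` are placeholders for the values the calibration procedure would identify, not the paper's actual parameters:

```python
import numpy as np

def current_command(q, qd, q_des, K, D, kt, coulomb, viscous):
    """Impedance law realized through a current-controlled actuator.

    tau_des = K (q_des - q) - D qd   (spring-damper around the target)
    The commanded current adds a friction compensation term and divides
    by the calibrated current/torque ratio kt.
    """
    tau_des = K * (q_des - q) - D * qd
    tau_friction = coulomb * np.sign(qd) + viscous * qd
    return (tau_des + tau_friction) / kt
```

Because the torque is inferred from current rather than measured, any error in `kt` or in the friction model shows up directly as an error in the rendered stiffness, which is why the calibration must tolerate model errors.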


Customizing Textile and Tactile Skins for Interactive Industrial Robots

Su, Bo Ying, Wei, Zhongqi, McCann, James, Yuan, Wenzhen, Liu, Changliu

arXiv.org Artificial Intelligence

Tactile skins made from textiles enhance robot-human interaction by localizing contact points and measuring contact forces. This paper presents a solution for rapidly fabricating, calibrating, and deploying these skins on industrial robot arms. The novel automated skin calibration procedure maps skin locations to robot geometry and calibrates contact force. Through experiments on a FANUC LR Mate 200id/7L industrial robot, we demonstrate that tactile skins made from textiles can be effectively used for human-robot interaction in industrial environments, and can provide unique opportunities in robot control and learning, making them a promising technology for enhancing robot perception and interaction.


Force/Torque Sensing for Soft Grippers using an External Camera

Collins, Jeremy A., Grady, Patrick, Kemp, Charles C.

arXiv.org Artificial Intelligence

Robotic manipulation can benefit from wrist-mounted force/torque (F/T) sensors, but conventional F/T sensors can be expensive, difficult to install, and damaged by high loads. We present Visual Force/Torque Sensing (VFTS), a method that visually estimates the 6-axis F/T measurement that would be reported by a conventional F/T sensor. In contrast to approaches that sense loads using internal cameras placed behind soft exterior surfaces, our approach uses an external camera with a fisheye lens that observes a soft gripper. VFTS includes a deep learning model that takes a single RGB image as input and outputs a 6-axis F/T estimate. We trained the model with sensor data collected while teleoperating a robot (Stretch RE1 from Hello Robot Inc.) to perform manipulation tasks. VFTS outperformed F/T estimates based on motor currents, generalized to a novel home environment, and supported three autonomous tasks relevant to healthcare: grasping a blanket, pulling a blanket over a manikin, and cleaning a manikin's limbs. VFTS also performed well with a manually operated pneumatic gripper. Overall, our results suggest that an external camera observing a soft gripper can perform useful visual force/torque sensing for a variety of manipulation tasks.


Robot to Human Object Handover using Vision and Joint Torque Sensor Modalities

Mohandes, Mohammadhadi, Moradi, Behnam, Gupta, Kamal, Mehrandezh, Mehran

arXiv.org Artificial Intelligence

We present a robot-to-human object handover algorithm and implement it on a 7-DOF arm equipped with a 3-finger mechanical hand. The system performs a fully autonomous and robust object handover to a human receiver in real time. Our algorithm relies on two complementary sensor modalities: joint torque sensors on the arm and an eye-in-hand RGB-D camera for sensor feedback. Our approach is entirely implicit, i.e., there is no explicit communication between the robot and the human receiver. Information obtained via the aforementioned sensor modalities is used as input to their respective deep neural networks. While the torque sensor network detects the human receiver's "intention", such as pull, hold, or bump, the vision sensor network detects whether the receiver's fingers have wrapped around the object. The networks' outputs are then fused, based on which a decision is made on whether to release the object. Despite substantive challenges in sensor feedback synchronization and object and human hand detection, our system achieves robust robot-to-human handover with 98% accuracy in our preliminary real experiments with human receivers.
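The abstract says the two networks' outputs are fused before the release decision but does not give the rule; one minimal fusion rule consistent with that description (the thresholds and the AND-gate logic are assumptions) might look like:

```python
def release_decision(intent_probs, grasp_prob,
                     intent_threshold=0.8, grasp_threshold=0.9):
    """Fuse the two network outputs into a release/hold decision.

    intent_probs: torque-network class probabilities over
                  {'pull', 'hold', 'bump'}.
    grasp_prob:   vision-network probability that the receiver's
                  fingers are wrapped around the object.
    Release only when the torque network confidently detects a pull
    AND the vision network confirms a secure grasp.
    """
    pulling = intent_probs.get('pull', 0.0) >= intent_threshold
    grasped = grasp_prob >= grasp_threshold
    return pulling and grasped
```

Requiring both modalities to agree makes a premature release (the costlier failure mode in a handover) less likely than with either network alone.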


Researchers boost robotic arm movement by adding a sense of touch

Engadget

Nathan Copeland knows a thing or two about brain implants. More than a decade after a car crash left him paralyzed from the chest down, Copeland enrolled in a medical trial that helped him to regain his sense of touch. The breakthrough saw scientists implant chips in his brain that allowed him to control a robotic hand. Now, in his mid-30s, he's become the focal point of another scientific breakthrough. Thanks to a new brain interface experiment, Copeland was able to feel the sensation of touch when his robotic hand came into contact with a surface or object.